Automated offensive language detection is essential in combating the spread of hate speech, particularly on social media. This paper describes our work on offensive language identification in the low-resource Indic language Marathi. The problem is formulated as a text classification task: identifying a tweet as offensive or non-offensive. We evaluate different monolingual and multilingual BERT models on this classification task, focusing on BERT models pre-trained on social media datasets. We compare the performance of MuRIL, MahaTweetBERT, MahaTweetBERT-Hateful, and MahaBERT on the HASOC 2022 test set. We also explore data augmentation from the existing Marathi hate speech corpora HASOC 2021 and L3Cube-MahaHate. MahaTweetBERT, a BERT model pre-trained on Marathi tweets, outperforms all other models when fine-tuned on the combined dataset (HASOC 2021 + HASOC 2022 + MahaHate), with an F1 score of 98.43 on the HASOC 2022 test set. With this, we also provide a new state-of-the-art result on the HASOC 2022 / MOLD v2 test set.
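The reported F1 score is presumably the macro-averaged F1 commonly used for HASOC shared tasks; as a minimal illustration with toy labels (not the HASOC data), macro F1 over the offensive/non-offensive classes can be computed as:

```python
def f1_per_class(y_true, y_pred, cls):
    """Per-class F1 from true/false positives and false negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true))
    return sum(f1_per_class(y_true, y_pred, c) for c in classes) / len(classes)

# toy example: HOF = offensive, NOT = non-offensive
truth = ["HOF", "NOT", "HOF", "NOT", "HOF", "NOT"]
pred  = ["HOF", "NOT", "HOF", "HOF", "HOF", "NOT"]
score = macro_f1(truth, pred)
```

Macro averaging weights both classes equally, which matters when offensive tweets are the minority class.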
Pre-training large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. Although this method has proven to be effective for many domains, it might not always provide desirable benefits. In this paper, we study the effects of hateful pre-training on low-resource hate speech classification tasks. While previous studies on the English language have emphasized its importance, we aim to augment their observations with some non-obvious insights. We evaluate different variations of tweet-based BERT models pre-trained on hateful, non-hateful, and mixed subsets of a 40M tweet dataset. This evaluation is carried out for the Indian languages Hindi and Marathi. This paper provides empirical evidence that hateful pre-training is not the best pre-training option for hate speech detection. We show that pre-training on non-hateful text from the target domain provides similar or better results. Further, we introduce HindTweetBERT and MahaTweetBERT, the first publicly available BERT models pre-trained on Hindi and Marathi tweets, respectively. We show that they provide state-of-the-art performance on hate speech classification tasks. We also release the hateful BERT models for the two languages, along with gold hate speech evaluation benchmarks, HateEval-Hi and HateEval-Mr, each consisting of 2,000 manually labeled tweets. The models and data are available at https://github.com/l3cube-pune/MarathiNLP .
Spatial understanding is a fundamental aspect of computer vision and integral to human-level reasoning about images, making it an important component of grounded language understanding. While recent large-scale text-to-image synthesis (T2I) models have shown unprecedented improvements in photorealism, it is unclear whether they have reliable spatial understanding capabilities. We investigate the ability of T2I models to generate correct spatial relationships among objects and present VISOR, an evaluation metric that captures how accurately the spatial relationship described in text is generated in the image. To benchmark existing models, we introduce a large-scale challenge dataset, SR2D, that contains sentences describing two objects and the spatial relationship between them. We construct and harness an automated evaluation pipeline that employs computer vision to recognize objects and their spatial relationships, and we employ it in a large-scale evaluation of T2I models. Our experiments reveal the surprising finding that, although recent state-of-the-art T2I models exhibit high image quality, they are severely limited in their ability to generate multiple objects or the specified spatial relations between them, such as left/right/above/below. Our analyses demonstrate several biases and artifacts of T2I models, such as the difficulty with generating multiple objects, a bias towards generating the first object mentioned, spatially inconsistent outputs for equivalent relationships, and a correlation between object co-occurrence and spatial understanding capabilities. We conduct a human study that shows the alignment between VISOR and human judgment about spatial understanding. We offer the SR2D dataset and the VISOR metric to the community in support of T2I spatial reasoning research.
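The paper defines VISOR precisely; as a rough, hypothetical sketch of the underlying idea, one can check a 2-D spatial relation between two detected bounding boxes by comparing centroids (boxes as (x_min, y_min, x_max, y_max) in image coordinates, with y growing downward) and score the fraction of generations in which the described relation holds:

```python
def centroid(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def relation_holds(box_a, box_b, relation):
    """Check whether object A stands in `relation` to object B,
    comparing box centroids (image coordinates: y grows downward)."""
    (ax, ay), (bx, by) = centroid(box_a), centroid(box_b)
    if relation == "left of":
        return ax < bx
    if relation == "right of":
        return ax > bx
    if relation == "above":
        return ay < by
    if relation == "below":
        return ay > by
    raise ValueError(f"unknown relation: {relation}")

def visor_like_score(samples):
    """Fraction of generated images in which both objects were detected
    AND the described relation holds (a simplified variant of VISOR).
    Each sample is ((box_a, box_b), relation); a missing detection is None."""
    hits = sum(
        1 for boxes, rel in samples
        if boxes[0] is not None and boxes[1] is not None
        and relation_holds(boxes[0], boxes[1], rel)
    )
    return hits / len(samples)
```

The real metric additionally conditions on object presence and aggregates over multiple generations per prompt; this sketch collapses those details.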
Videos often capture objects, their visible properties, their motion, and the interactions between different objects. Objects also have physical properties, such as mass, which the imaging pipeline is unable to directly capture. However, these properties can be estimated by utilizing cues from relative object motion and the dynamics introduced by collisions. In this paper, we introduce CRIPP-VQA, a new video question answering dataset for reasoning about the implicit physical properties of objects in a scene. CRIPP-VQA contains videos of objects in motion, annotated with questions that involve counterfactual reasoning about the effect of actions, questions about planning in order to reach a goal, and descriptive questions about visible properties of objects. The CRIPP-VQA test set enables evaluation under several out-of-distribution settings -- videos containing objects whose masses, coefficients of friction, and initial velocities are not observed in the training distribution. Our experiments reveal a surprising and significant performance gap between answering questions about implicit properties (the focus of this paper) and explicit properties of objects (the focus of prior work).
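As a concrete illustration of how an implicit property can be recovered from collision cues (not the paper's actual pipeline), conservation of momentum in one dimension gives m1·Δv1 = −m2·Δv2, so a mass ratio follows directly from observed velocity changes:

```python
def mass_ratio_from_collision(v1_before, v1_after, v2_before, v2_after):
    """Infer m1/m2 from a 1-D collision using conservation of momentum:
    m1 * (v1_after - v1_before) = -m2 * (v2_after - v2_before)."""
    dv1 = v1_after - v1_before
    dv2 = v2_after - v2_before
    if dv1 == 0:
        raise ValueError("object 1 velocity unchanged; ratio undefined")
    return -dv2 / dv1

# equal masses in an elastic head-on collision exchange velocities,
# so the inferred ratio should be 1
ratio = mass_ratio_from_collision(2.0, 0.0, 0.0, 2.0)
```

In practice the velocities themselves must be estimated from tracked object positions across frames, which is where the video pipeline comes in.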
Unmanned aerial vehicles (UAVs) are becoming increasingly popular, with applications that cross the boundaries between science and industry: aerial photography, package delivery, and disaster management all benefit from this technology. Before UAVs become commonplace, however, challenges remain to be addressed to make them reliable and safe. This paper discusses the challenges associated with the precision landing of UAVs, surveying approaches to sensing and control along with their advantages and disadvantages in various applications.
As drone technology has improved, these versatile autonomous vehicles have found a growing number of uses, from surveillance to aerial photography to package delivery, and each of these applications brings unique challenges. This paper implements a solution to one such challenge: landing on a moving target. This problem has been tackled before with varying degrees of success, but most implementations focus on indoor applications. The outdoors presents greater challenges in the form of variables such as wind and lighting, and outdoor drones are heavier and more susceptible to inertial effects. Our approach is purely vision-based, using a monocular camera and fiducial markers to localize the drone, and PID control to follow and land on the platform.
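As a minimal sketch of the PID control loop described above (toy gains and toy dynamics, not the paper's tuned controller), one axis of the tracking problem looks like this:

```python
class PID:
    """Single-axis PID controller; the error is the offset (in pixels
    or metres) between the detected marker centre and the drone."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# toy closed loop: the drone position converges toward the marker at x = 0
pid = PID(kp=0.8, ki=0.0, kd=0.2)
x = 5.0
for _ in range(50):
    x += pid.update(-x, dt=0.1) * 0.1
```

In the full system one such controller would run per axis, with the error taken from the fiducial-marker pose estimate each frame.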
Congestion anomaly detection is of paramount importance in intelligent transportation systems. The goals of transportation agencies are two-fold: to monitor the general traffic conditions in an area of interest and to locate road segments in abnormally congested states. Modeling congestion patterns can achieve these goals for citywide road networks, which amounts to learning the distribution of multivariate time series (MTS). However, existing works are either not scalable or unable to simultaneously capture the spatial information in MTS. To this end, we propose a principled and comprehensive framework consisting of a data-driven generative approach that performs tractable density estimation to detect traffic anomalies. Our approach first clusters segments in the feature space and then uses conditional normalizing flows to identify anomalous temporal snapshots at the cluster level in an unsupervised setting. We then identify anomalies at the segment level by using a kernel density estimator on the anomalous clusters. Extensive experiments on synthetic datasets show that our approach significantly outperforms several state-of-the-art congestion anomaly detection and diagnosis methods in terms of recall and F1 score. We also use the generative model to sample labeled data, which can train classifiers in a supervised setting, alleviating the lack of labeled data for anomaly detection in sparse settings.
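As a stdlib-only sketch of the segment-level step, a 1-D Gaussian kernel density estimator can flag congestion scores that fall in low-density regions as anomalous (the bandwidth and threshold here are illustrative, not the paper's values):

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a density estimator for 1-D `samples` with a Gaussian kernel."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
    return density

def is_anomalous(x, density, threshold):
    """Flag x as anomalous when its estimated density falls below threshold."""
    return density(x) < threshold

# congestion scores observed on normal days; 0.95 lies far outside this cluster
normal_scores = [0.10, 0.12, 0.11, 0.13, 0.09, 0.14]
density = gaussian_kde(normal_scores, bandwidth=0.02)
```

The framework applies this only within clusters already flagged by the normalizing flow, which keeps the KDE focused on a small, homogeneous set of segments.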
Core to everyday tasks such as reading and driving is active object recognition. Attempts to model such tasks are currently hindered by the inability to incorporate time. People exhibit a flexible tradeoff between speed and accuracy, and this tradeoff is a crucial human skill. Deep neural networks have emerged as promising candidates for predicting peak human object recognition performance and neural activity. However, modeling the temporal dimension, i.e., the speed-accuracy tradeoff (SAT), is essential for them to serve as useful computational models of how humans recognize objects. To this end, we here present the first large-scale (148 observers, 4 neural networks, 8 tasks) dataset of the speed-accuracy tradeoff (SAT) in recognizing ImageNet images. In each human trial, a beep indicating the desired reaction time sounds at a fixed delay after the image is shown, and the observer's response counts only if it occurs near the time of the beep. In a series of blocks, we test many beep latencies, i.e., reaction times. We observe that human accuracy increases with reaction time, and go on to compare its characteristics with the behavior of several dynamic neural networks capable of inference-time-adaptive computation. Using FLOPs as an analog of reaction time, we compare the networks with humans on curve-fit error, category-wise correlation, and curve steepness, and conclude that cascaded dynamic neural networks are a promising model of human reaction times in object recognition tasks.
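The time-adaptive behavior of a cascaded network can be caricatured as an anytime cascade that runs stages until a FLOP budget (the stand-in for reaction time) is exhausted; this is a toy sketch with hypothetical stages and costs, not any of the evaluated networks:

```python
def anytime_predict(stages, budget):
    """Run cascade stages in order; each stage is a (flop_cost, prediction)
    pair.  Stop before the stage that would exceed the budget and return
    the last prediction produced (None if even stage 1 does not fit)."""
    spent, prediction = 0, None
    for cost, output in stages:
        if spent + cost > budget:
            break
        spent += cost
        prediction = output
    return prediction

# hypothetical cascade: deeper stages cost more FLOPs and are more refined
stages = [(10, "animal"), (20, "dog"), (40, "terrier")]
```

Under this caricature, tighter deadlines yield coarser predictions, mirroring the observation that human accuracy grows with allowed reaction time.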
To succeed at single-source domain generalization, maximizing the diversity of synthesized domains has emerged as one of the most effective strategies. Many recent successes come from methods that pre-specify the types of diversity a model is exposed to during training, so that it can ultimately generalize well to new domains. However, naive diversity-based augmentations do not suffice, either because they cannot model large domain shifts or because the span of pre-specified transforms does not cover the types of shift commonly occurring in domain generalization. To address this issue, we present a novel framework that uses adversarially learned transformations (ALT), in which a neural network models plausible yet hard image transformations that fool the classifier. This network is randomly initialized for each batch and trained for a fixed number of steps to maximize classification error. Further, we enforce consistency between the classifier's predictions on the clean and transformed images. Through extensive empirical analysis, we find that this new form of adversarial transformation simultaneously achieves the goals of diversity and hardness, and outperforms all existing techniques on competitive benchmarks for single-source domain generalization. We also show that ALT can naturally work with existing diversity modules to produce highly distinct source domains, leading to state-of-the-art performance.
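As a heavily simplified, hypothetical stand-in for ALT's learned transformation network, one can randomly search for a small affine (contrast/brightness) transform of a toy scalar input that flips a linear classifier's decision; ALT instead learns such transforms with gradient steps on a per-batch network:

```python
import random

def classify(x, w, b):
    """Toy linear classifier on a scalar feature; returns class 0 or 1."""
    return 1 if w * x + b > 0 else 0

def adversarial_transform(x, label, w, b, trials=200, seed=0):
    """Search for an affine transform (contrast a, brightness c) of x
    that flips the classifier's decision; a gradient-free caricature
    of ALT's adversarially trained transformation network."""
    rng = random.Random(seed)
    for _ in range(trials):
        a = 1.0 + rng.uniform(-0.5, 0.5)   # contrast jitter
        c = rng.uniform(-1.0, 1.0)         # brightness jitter
        x_t = a * x + c
        if classify(x_t, w, b) != label:
            return x_t
    return x  # no fooling transform found within the trial budget
```

The consistency loss mentioned in the abstract would then pull the classifier's predictions on x and its transformed counterpart back together during training.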
In this paper, we propose a novel socio-inspired convolutional neural network (CNN) deep learning model for image splicing detection. Based on the premise that the detection of coarsely spliced image regions can improve the detection of visually imperceptible spliced image forgeries, the proposed model, named MissMarple, is a twin CNN network involving feature-transfer learning. Training and testing the proposed model on the Columbia splicing, WildWeb, and DSO1 datasets, as well as a proposed dataset titled AbhAS consisting of realistic splicing forgeries, revealed an improvement in detection accuracy over existing deep learning models.